499 research outputs found

    A Unifying Variational Perspective on Some Fundamental Information Theoretic Inequalities

    This paper proposes a unifying variational approach for proving and extending some fundamental information-theoretic inequalities. Fundamental results such as the maximization of differential entropy, the minimization of Fisher information (the Cramér-Rao inequality), the worst additive noise lemma, the entropy power inequality (EPI), and the extremal entropy inequality (EEI) are interpreted as functional problems and proved within the framework of the calculus of variations. Several applications and possible extensions of the proposed results are briefly mentioned.
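    For reference, two of the inequalities treated above admit compact statements (standard textbook forms, not quoted from the paper): the entropy power inequality and the Cramér-Rao bound, written in LaTeX as

        % Entropy power of a random vector X in R^n with differential entropy h(X)
        N(X) = \frac{1}{2\pi e}\, e^{\frac{2}{n} h(X)}

        % Entropy power inequality, for independent X and Y
        N(X + Y) \ge N(X) + N(Y)

        % Cramér-Rao inequality for an unbiased estimator \hat{\theta} of \theta
        \operatorname{Var}(\hat{\theta}) \ge I(\theta)^{-1},
        \qquad
        I(\theta) = \mathbb{E}\!\left[\left(\partial_{\theta} \log f(X;\theta)\right)^{2}\right]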

    Learning How to Demodulate from Few Pilots via Meta-Learning

    Consider an Internet-of-Things (IoT) scenario in which devices transmit sporadically using short packets with few pilot symbols. Each device transmits over a fading channel and is characterized by an amplifier with a unique non-linear transfer function. The number of pilots is generally insufficient to obtain an accurate estimate of the end-to-end channel, which includes the effects of fading and of the amplifier's distortion. This paper proposes to tackle this problem using meta-learning. Accordingly, pilots from previous IoT transmissions are used as meta-training data in order to learn a demodulator that can quickly adapt to new end-to-end channel conditions from few pilots. Numerical results validate the advantages of the approach as compared to training schemes that either do not leverage prior transmissions or apply a standard learning algorithm to previously received data.
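    As a rough illustration of the idea (not the paper's algorithm), the sketch below uses Reptile-style first-order meta-learning with a toy linear demodulator and synthetic linear channels; every name and modeling choice here is a hypothetical stand-in.

        # Minimal sketch of meta-learning for fast demodulator adaptation.
        # Assumptions (not from the paper): a linear demodulator trained with
        # squared loss, and Reptile-style meta-learning in place of the
        # paper's exact algorithm.
        import numpy as np

        rng = np.random.default_rng(0)

        def sgd_adapt(w, received, symbols, lr=0.1, steps=5):
            """Adapt demodulator weights w on a few (received pilot, symbol) pairs."""
            for _ in range(steps):
                grad = 2 * received.T @ (received @ w - symbols) / len(symbols)
                w = w - lr * grad
            return w

        def make_task(n_pilots=8, dim=2):
            """Toy 'device': random linear end-to-end channel plus noise."""
            channel = rng.normal(size=(dim, dim))      # stands in for fading + amp
            symbols = rng.choice([-1.0, 1.0], size=(n_pilots, dim))
            received = symbols @ channel + 0.1 * rng.normal(size=(n_pilots, dim))
            return received, symbols

        # Reptile outer loop: nudge meta-weights toward each task-adapted solution,
        # so a handful of SGD steps suffices on a new device's few pilots.
        w_meta = np.zeros((2, 2))
        for _ in range(200):                           # meta-training devices
            received, symbols = make_task()
            w_task = sgd_adapt(w_meta.copy(), received, symbols)
            w_meta += 0.05 * (w_task - w_meta)         # Reptile meta-update

        # Deployment: adapt to a new device from just a few pilots.
        new_received, new_symbols = make_task(n_pilots=4)
        w_new = sgd_adapt(w_meta.copy(), new_received, new_symbols, steps=3)
        print("adapted demodulator weights:\n", w_new)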

    The Very Dark Side of Internal Capital Markets: Evidence from Diversified Business Groups in Korea

    This paper examines capital allocation within Korean chaebol firms during the period from 1991 to 2000. We find strong evidence that, during the pre-Asian-financial-crisis period in the early 1990s, poorly performing firms with fewer investment opportunities invested more than well-performing firms with better growth opportunities. We also find evidence of cross-subsidization among firms in the same chaebol group during the pre-crisis period. The existence of this "dark" side of internal capital markets explains much of this striking phenomenon, in which "tunneling" practices were common during the pre-crisis period. However, the inefficient capital allocation appears to disappear after the crisis, as banks gain more power and the market disciplines inefficient chaebol firms.

    Quantum Conformal Prediction for Reliable Uncertainty Quantification in Quantum Machine Learning

    In this work, we aim to augment the decisions output by quantum models with "error bars" that provide finite-sample coverage guarantees. Quantum models implement implicit probabilistic predictors that produce multiple random decisions for each input through measurement shots. Randomness arises not only from the inherent stochasticity of quantum measurements, but also from quantum gate noise and quantum measurement noise caused by noisy hardware. Furthermore, quantum noise may be correlated across shots, and it may drift over time. This paper proposes to leverage such randomness to define prediction sets, for both classification and regression, that provably capture the uncertainty of the model. The approach builds on probabilistic conformal prediction (PCP), while accounting for the unique features of quantum models. Among the key technical innovations, we introduce a new general class of non-conformity scores that address the presence of quantum noise, including possible drifts. Experimental results, using both simulators and current quantum computers, confirm the theoretical calibration guarantees of the proposed framework.
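    To make the PCP machinery concrete, here is a minimal classical sketch of probabilistic conformal prediction for regression, with the quantum model replaced by a toy Gaussian sampler; the stub and all names are illustrative assumptions, not the paper's construction.

        # Minimal sketch of probabilistic conformal prediction (PCP): score each
        # calibration point by the distance from its true label to the nearest
        # sampled prediction, then return a union of intervals around new samples.
        import numpy as np

        rng = np.random.default_rng(1)

        def model_samples(x, k=20):
            """Stub for K measurement shots of a probabilistic model at input x."""
            return x + rng.normal(scale=0.5, size=k)   # pretend predictive sampler

        # Calibration scores: min distance from true label to the K shots.
        alpha = 0.1
        x_cal = rng.uniform(-2, 2, size=200)
        y_cal = x_cal + rng.normal(scale=0.5, size=200)  # toy ground truth
        scores = np.array([np.min(np.abs(model_samples(x) - y))
                           for x, y in zip(x_cal, y_cal)])

        # Conformal quantile with the finite-sample correction.
        n = len(scores)
        q = np.quantile(scores, np.ceil((n + 1) * (1 - alpha)) / n, method="higher")

        # Prediction set for a new input: union of radius-q intervals around shots.
        shots = model_samples(0.3)
        prediction_set = [(s - q, s + q) for s in shots]
        print(f"radius {q:.3f}; e.g. first interval {prediction_set[0]}")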

    Interpretable Prototype-based Graph Information Bottleneck

    The success of Graph Neural Networks (GNNs) has created a need to understand their decision-making process and to provide explanations for their predictions, giving rise to explainable AI (XAI), which offers transparent explanations for black-box models. Recently, prototypes have been used to improve the explainability of models by learning prototypes that indicate the training graphs affecting a prediction. However, these approaches tend to provide prototypes with excessive information from the entire graph, leading to the exclusion of key substructures or the inclusion of irrelevant substructures, which can limit both the interpretability and the performance of the model in downstream tasks. In this work, we propose a novel framework of explainable GNNs, called interpretable Prototype-based Graph Information Bottleneck (PGIB), which incorporates prototype learning within the information bottleneck framework so that prototypes capture the key subgraph of the input graph that is important for the model's prediction. This is the first work that incorporates prototype learning into the process of identifying the key subgraphs that have a critical impact on prediction performance. Extensive experiments, including qualitative analysis, demonstrate that PGIB outperforms state-of-the-art methods in terms of both prediction performance and explainability.
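    As a schematic reference (a standard graph information bottleneck objective, not quoted from the paper), extracting a prediction-relevant subgraph can be posed as

        \min_{G_{\mathrm{sub}}} \; -\, I(G_{\mathrm{sub}}; Y) \;+\; \beta\, I(G_{\mathrm{sub}}; G)

    where G is the input graph, Y the label, G_sub the extracted key subgraph, I(·;·) mutual information, and β trades predictiveness against compression.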